    Memory Checking for Parallel RAMs

    When outsourcing a database to an untrusted remote server, one might want to verify the integrity of its contents while accessing it. To solve this, Blum et al. [FOCS '91] proposed the notion of memory checking. Memory checking allows a user to run a RAM program on a remote server, with the ability to verify the integrity of the storage using only small local storage. In this work, we define and initiate the formal study of memory checking for parallel RAMs (PRAMs). The parallel RAM model is very expressive and captures many modern architectures, such as multi-core architectures and cloud clusters. When multiple clients run a PRAM algorithm on a shared remote server, concurrency issues can cause inconsistencies, so integrity verification is an even more desirable property in this setting. Assuming only the existence of one-way functions, we construct an online memory checker (one that reports faults as soon as they occur) for PRAMs with $O(\log N)$ simulation overhead in both work and depth. In addition, we construct an offline memory checker (one that reports faults only after a long sequence of operations) with amortized $O(1)$ simulation overhead in both work and depth. Our constructions match the best known simulation overhead of memory checkers in the standard single-user RAM setting. As an application of our parallel memory checking constructions, we additionally construct the first maliciously secure oblivious parallel RAM (OPRAM) with polylogarithmic overhead.
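    To illustrate the offline flavor described above (faults are reported only at audit time), here is a toy single-user checker in the spirit of the timestamp technique of Blum et al. All names are illustrative, not from the paper, and the XOR-based multiset hash is an insecure stand-in for a collision-resistant incremental multiset hash; this is a sketch of the idea, not the paper's parallel construction.

```python
import hashlib

def h(item):
    # Toy multiset hash: XOR of per-item SHA-256 digests. Illustrative only;
    # a real offline checker needs a collision-resistant incremental hash.
    d = hashlib.sha256(repr(item).encode()).digest()
    return int.from_bytes(d, "big")

class OfflineChecker:
    """Toy offline memory checker: keep multiset hashes of every
    (addr, value, timestamp) triple written to and read from untrusted
    memory. If the memory answered honestly, every written triple is
    read back exactly once, so the two hashes agree at audit time."""
    def __init__(self, n, memory):
        self.mem = memory      # untrusted storage: addr -> (value, timestamp)
        self.n = n
        self.t = 0             # trusted local counter
        self.W = 0             # hash of the write multiset
        self.R = 0             # hash of the read multiset
        for a in range(n):     # initialize every cell with a timestamped write
            self._raw_write(a, 0)

    def _raw_write(self, addr, value):
        self.t += 1
        self.W ^= h((addr, value, self.t))
        self.mem[addr] = (value, self.t)

    def read(self, addr):
        value, t = self.mem[addr]          # untrusted response
        self.R ^= h((addr, value, t))      # retire the old triple
        self._raw_write(addr, value)       # write back with a fresh timestamp
        return value

    def write(self, addr, value):
        old, t = self.mem[addr]
        self.R ^= h((addr, old, t))        # retire the overwritten triple
        self._raw_write(addr, value)

    def audit(self):
        # Read every cell one final time; an honest memory makes R == W.
        for a in range(self.n):
            value, t = self.mem[a]
            self.R ^= h((a, value, t))
        return self.R == self.W
```

    Any tampering (e.g., replacing a stored value) injects a triple into the read multiset that was never written, so the audit fails with high probability under a suitable hash; the paper's contribution is making this kind of guarantee work for concurrent PRAM accesses with amortized $O(1)$ overhead.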

    MacORAMa: Optimal Oblivious RAM with Integrity

    Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on an external database without revealing any information through the access pattern. For a database of size $N$, well-known lower bounds show that a multiplicative overhead of $\Omega(\log N)$ in the number of RAM queries is necessary assuming $O(1)$ client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO '21) with $O(\log N)$ worst-case overhead and $O(1)$ client storage. However, this optimal ORAM is known to be secure only in the honest-but-curious setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the malicious setting, where an adversary is additionally allowed to tamper with the database, this construction and many others in fact become insecure. In this work, we construct the first maliciously secure ORAM with worst-case $O(\log N)$ overhead and $O(1)$ client storage assuming one-way functions, which are also necessary. By the $\Omega(\log N)$ lower bound, our construction is asymptotically optimal. To attain this overhead, we develop techniques to intricately interleave online and offline memory checking for malicious security. Furthermore, we complement our positive result by showing the impossibility of a generic overhead-preserving compiler from honest-but-curious to malicious security, barring a breakthrough in memory checking.
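    For intuition about what "hiding the access pattern" costs, the trivial baseline scans every physical cell on each logical access, so the physical pattern is data-independent at $O(N)$ overhead; the works above shrink this to the optimal $O(\log N)$. The sketch below shows only the baseline idea (class name `TrivialORAM` is illustrative, and it omits the encryption and integrity protection a real ORAM needs).

```python
class TrivialORAM:
    """Baseline oblivious RAM: every logical read/write touches all N
    physical cells in the same fixed order, so an observer of the
    physical accesses learns nothing about the logical address.
    Overhead is O(N) per access, versus the O(log N) optimum.
    Note: values are stored in the clear here; a real ORAM would also
    (re-)encrypt cells so contents reveal nothing either."""
    def __init__(self, n):
        self.cells = [0] * n

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.cells)):  # fixed, data-independent scan
            v = self.cells[i]
            if i == addr:
                result = v
                if op == "write":
                    self.cells[i] = value
        return result
```

    Malicious security is a separate axis: even this baseline says nothing about an adversary who tampers with the cells, which is exactly the gap the paper closes by interleaving online and offline memory checking.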

    Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds

    The AP-LCA problem asks, given an $n$-node directed acyclic graph (DAG), to compute for every pair of vertices $u$ and $v$ in the DAG a lowest common ancestor (LCA) of $u$ and $v$, if one exists. In this paper we study several interesting variants of AP-LCA, providing both algorithms and fine-grained lower bounds for them. The lower bounds we obtain are the first conditional lower bounds for LCA problems higher than $n^{\omega-o(1)}$, where $\omega$ is the matrix multiplication exponent. Some of our results include:
    - In any DAG, we can detect all vertex pairs that have at most two LCAs and list all of their LCAs in $O(n^\omega)$ time. This algorithm extends a result of [Kowaluk and Lingas ESA'07], which gave an $\tilde{O}(n^\omega)$ time algorithm that detects all pairs with a unique LCA in a DAG and outputs their corresponding LCAs.
    - Listing 7 LCAs per vertex pair in DAGs requires $n^{3-o(1)}$ time under the popular assumption that 3-uniform 5-hyperclique detection requires $n^{5-o(1)}$ time. This is surprising since essentially cubic time is sufficient to list all LCAs (if $\omega=2$).
    - Counting the number of LCAs for every vertex pair in a DAG requires $n^{3-o(1)}$ time under the Strong Exponential Time Hypothesis, and $n^{\omega(1,2,1)-o(1)}$ time under the 4-Clique hypothesis. This shows that the algorithm of [Eckhardt, Mühling and Nowak ESA'07] for listing all LCAs for every pair of vertices is likely optimal.
    - Given a DAG and a vertex $w_{u,v}$ for every vertex pair $u,v$, verifying whether all $w_{u,v}$ are valid LCAs requires $n^{2.5-o(1)}$ time assuming 3-uniform 4-hyperclique detection requires $n^{4-o(1)}$ time. This defies the common intuition that verification is easier than computation, since returning some LCA per vertex pair can be solved in $O(n^{2.447})$ time [Grandoni et al. SODA'21].
    Comment: To appear in ICALP 2022. Abstract shortened to fit arXiv requirements.
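    For intuition, an LCA of $u$ and $v$ in a DAG is a common ancestor none of whose proper descendants is also a common ancestor. That definition can be checked directly from the transitive closure; the sketch below (function name `all_pairs_lcas` is illustrative) lists all LCAs per pair on small DAGs in roughly cubic-or-worse time, far from the paper's matrix-multiplication-time algorithms, and treats each vertex as an ancestor of itself, as is standard for this problem.

```python
from itertools import product

def all_pairs_lcas(n, edges):
    """Naive all-pairs LCA listing for a DAG on vertices 0..n-1,
    via explicit reachability. Illustrative only: the paper's
    algorithms for the bounded-count variants run in O(n^omega) time."""
    # reach[a][b] is True iff a reaches b (each vertex reaches itself).
    reach = [[i == j for j in range(n)] for i in range(n)]
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
    for s in range(n):                    # DFS from each source
        stack, seen = [s], {s}
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    reach[s][y] = True
                    stack.append(y)
    lcas = {}
    for u, v in product(range(n), repeat=2):
        ca = [z for z in range(n) if reach[z][u] and reach[z][v]]
        # Keep common ancestors with no proper descendant in ca.
        lcas[(u, v)] = [z for z in ca
                        if not any(w != z and reach[z][w] for w in ca)]
    return lcas
```

    On the diamond DAG 0→1, 0→2, 1→3, 2→3, the pair (1, 2) has the single LCA 0, while (1, 3) has LCA 1, since 1 is itself a common ancestor.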